Add configprovider to openai text generation and model config #1043
JaJamyG wants to merge 11 commits into microsoft:main from
Conversation
@microsoft-github-policy-service agree
Fixes #1042
applications/evaluation/Evaluators/Faithfulness/FaithfulnessEvaluator.cs
dluc left a comment
Did you try overriding the model?
I tested the PR, and requests are always sent to the model defined in the configuration, not the one in the context.
I suspect that the client used internally (Semantic Kernel) doesn't support passing the model ID in the request.
Sorry for the long delay. The PR slipped my mind, but I've tested it now. I want to create a small project where I can test it in more detail.
Closing as part of repository maintenance - no further action planned on this issue.
Motivation and Context (Why the change? What's the scenario?)
I want to be able to change the model used by OpenAI text generation at runtime.
High level description (Approach, Design)
Adds the ability to change GPT models at runtime when sending questions to OpenAI.
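A minimal sketch of the pattern this PR is after: resolve the model per request from a context, falling back to the configured default. The `ContextModelProvider` class, the `model_id` key, and the dictionary-based context below are illustrative assumptions for this sketch, not Kernel Memory's or Semantic Kernel's actual API.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical config provider: picks the model from per-request context
// arguments when one is supplied, otherwise uses the configured default.
// Names ("model_id", ContextModelProvider) are illustrative, not the KM API.
public class ContextModelProvider
{
    private readonly string _defaultModel;

    public ContextModelProvider(string defaultModel)
    {
        _defaultModel = defaultModel;
    }

    // Return the context override if present and non-empty, else the default.
    public string ResolveModel(IReadOnlyDictionary<string, object?> contextArgs)
    {
        if (contextArgs.TryGetValue("model_id", out var value)
            && value is string model
            && !string.IsNullOrWhiteSpace(model))
        {
            return model;
        }

        return _defaultModel;
    }
}
```

The key design point, echoed in the review above, is that the resolved model must actually reach the underlying client on each request; if the client only reads the model from its construction-time configuration, the override never takes effect.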